

Structural performance assessment of GFRP elastic gridshells by machine learning interpretability methods

Soheila KOOKALANI; Bin CHENG; Jose Luis Chavez TORRES

Frontiers of Structural and Civil Engineering, 2022, Vol. 16, Issue 10, pp. 1249-1266, doi: 10.1007/s11709-022-0858-5

Abstract: The prediction of structural performance plays a significant role in damage assessment of glass fiber reinforced polymer (GFRP) elastic gridshell structures. This study implements machine learning (ML) approaches to predict the maximum stress and displacement of GFRP elastic gridshell structures. Several ML algorithms are considered: linear regression (LR), ridge regression (RR), support vector regression (SVR), K-nearest neighbors (KNN), decision tree (DT), random forest (RF), adaptive boosting (AdaBoost), extreme gradient boosting (XGBoost), category boosting (CatBoost), and light gradient boosting machine (LightGBM). The output features of structural performance are the maximum stress, f1(x), and the maximum displacement-to-self-weight ratio, f2(x). A comparative study shows that the CatBoost model achieves the highest prediction accuracy. Finally, interpretable ML approaches, including Shapley additive explanations (SHAP), partial dependence plots (PDP), and accumulated local effects (ALE), are applied to explain the predictions. SHAP describes the importance of each variable to structural performance both locally and globally. Sensitivity analysis (SA), the feature importance of the CatBoost model, and the SHAP approach identify the same parameters as the most significant variables for f1(x) and f2(x).

Keywords: machine learning; gridshell structure; regression; sensitivity analysis; interpretability methods
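To make the PDP idea mentioned in the abstract concrete, here is a minimal plain-Python sketch of how a single partial dependence value is computed: for each grid value of one feature, the feature is forced to that value in every sample and the model's predictions are averaged. The `toy_model` function and the tiny dataset are hypothetical stand-ins, not the paper's CatBoost setup or data.

```python
# Toy illustration of partial dependence: for a grid of values of one
# feature, average the model's prediction over the dataset with that
# feature fixed at each grid value.

def partial_dependence(model, X, feature_idx, grid):
    """Return the average prediction for each value in `grid`, with
    column `feature_idx` of every row in X forced to that value."""
    pd_values = []
    for v in grid:
        preds = []
        for row in X:
            modified = list(row)
            modified[feature_idx] = v  # intervene on one feature only
            preds.append(model(modified))
        pd_values.append(sum(preds) / len(preds))
    return pd_values

# Hypothetical stand-in "model": a hand-written nonlinear function of two features.
def toy_model(x):
    return x[0] ** 2 + 0.5 * x[1]

X = [[0.0, 1.0], [1.0, 2.0], [2.0, 3.0]]
grid = [0.0, 1.0, 2.0]
print(partial_dependence(toy_model, X, 0, grid))  # → [1.0, 2.0, 5.0]
```

The averaging over all samples is what distinguishes PDP from simply plotting the model along one axis: it marginalizes over the joint distribution of the other features rather than holding them at a single point.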

Novel interpretable mechanism of neural networks based on network decoupling method

Frontiers of Engineering Management, 2021, Vol. 8, Issue 4, pp. 572-581, doi: 10.1007/s42524-021-0169-x

Abstract: The lack of interpretability of neural network algorithms has become a bottleneck for their wide application. We propose a general mathematical framework that couples the complex structure of the system with the nonlinear activation function to explore a decoupled dimension-reduction method for high-dimensional systems and to reveal the calculation mechanism of the neural network. We apply the framework to several network models and to a real system, the whole-neuron map of Caenorhabditis elegans. The results show that a simple linear mapping relationship exists between network structure and network behavior in neural networks with high-dimensional, nonlinear characteristics. Our simulations and theoretical results fully demonstrate this phenomenon. The new interpretation mechanism provides not only a potential mathematical calculation principle for neural networks but also an effective way to accurately match and predict human brain or animal activity, which can further expand and enrich the interpretable mechanism of artificial neural networks in the future.

Keywords: neural networks; interpretability; dynamical behavior; network decoupling

Visual interpretability for deep learning (Review)

Quan-shi ZHANG, Song-chun ZHU

Frontiers of Information Technology &amp; Electronic Engineering, 2018, Vol. 19, Issue 1, pp. 27-39, doi: 10.1631/FITEE.1700808

Abstract: This paper reviews recent research on understanding the internal feature representations of neural networks and on training deep neural networks whose middle-layer representations are interpretable. Although deep neural networks have achieved outstanding performance in many artificial intelligence tasks, the interpretability of their middle-layer representations remains a major bottleneck for the field. At present, deep neural networks obtain strong classification power at the cost of low-interpretability, black-box representations. We argue that improving the interpretability of middle-layer feature representations can help break through many bottlenecks in deep learning, such as learning from small data, semantic-level human-machine interactive learning, and precisely repairing defects in middle-layer feature representations based on their intrinsic semantics. Focusing on convolutional neural networks, this paper surveys: (1) methods for visualizing network representations; (2) methods for diagnosing network representations; (3) methods for automatically disentangling and explaining convolutional neural networks; (4) methods for learning neural networks with interpretable middle-layer feature representations; and (5) middle-to-end deep learning algorithms based on network interpretability. Finally, possible future trends in explainable artificial intelligence are discussed.

Keywords: artificial intelligence; deep learning; interpretable model
